Many scientific domains gather sufficient labels to train machine learning algorithms through human-in-the-loop techniques provided by the Zooniverse.org citizen science platform. As the range of projects, task types, and data rates increases, accelerating model training is of paramount concern so that volunteer effort can be focused where it is most needed. The application of Transfer Learning (TL) between Zooniverse projects holds promise as a solution. However, understanding the effectiveness of TL approaches that pretrain on large-scale generic image sets versus images with similar characteristics, possibly from similar tasks, is an open challenge. We apply a generative segmentation model to two Zooniverse project-based data sets: (1) to identify fat droplets in liver cells (FatChecker; FC) and (2) to identify kelp beds in satellite images (Floating Forests; FF) through transfer learning from the first project. We compare and contrast its performance with a TL model based on the COCO image set, and subsequently with baseline counterparts. We find that both the FC and COCO TL models perform better than the baseline cases when using >75% of the original training sample size. The COCO-based TL model generally performs better than the FC-based one, likely due to its more generalized features. Our investigation provides important insights into the use of TL approaches on multi-domain data hosted across different Zooniverse projects, enabling future projects to accelerate task completion.
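For concreteness, a minimal sketch of the COCO-pretraining route, assuming a torchvision Mask R-CNN as a stand-in for the paper's segmentation model (the actual architecture and data pipeline are not specified here):

```python
# Hedged sketch: start from COCO-pretrained weights, then replace the task
# heads so the backbone's general-purpose features can be fine-tuned on
# Zooniverse project data. Mask R-CNN is an assumed stand-in architecture.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_tl_model(num_classes: int):
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Swap the box-classification head for one sized to the new task.
    in_feats = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)
    # Swap the mask head likewise.
    in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256, num_classes)
    return model

# e.g. two classes for Floating Forests: background and kelp
model = build_tl_model(num_classes=2)
```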
Astronomers have typically set out to solve supervised machine learning problems by creating their own representations from scratch. We show that deep learning models trained to answer every Galaxy Zoo DECaLS question learn meaningful semantic representations of galaxies that are useful for new tasks on which the models were never trained. We exploit these representations to outperform recent approaches on practical tasks crucial for investigating large galaxy samples. The first task is identifying galaxies with similar morphology to a query galaxy. Given a single galaxy assigned a free-text tag by a human (e.g., "#diffuse"), we can find galaxies matching that tag for most tags. The second task is identifying the anomalies most interesting to a particular researcher. Our approach is 100% accurate at identifying the most interesting 100 anomalies (as judged by Galaxy Zoo 2 volunteers). The third task is adapting a model to solve a new task using only a small number of newly labelled galaxies. Models fine-tuned from our representation are better able to identify ring galaxies than models fine-tuned from terrestrial images (ImageNet) or trained from scratch. We solve each task with very few new labels: either one (for similarity search) or hundreds (for anomaly detection or fine-tuning). This challenges the long-standing view that deep supervised methods require new large labelled datasets to be practically useful in astronomy. To help the community benefit from our pretrained models, we release our fine-tuning code, Zoobot. Zoobot is accessible to researchers with no prior deep learning experience.
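The fine-tuning recipe can be sketched generically; the following is a minimal illustration in plain PyTorch, not Zoobot's actual API, with an ImageNet EfficientNet-B0 standing in for the Galaxy Zoo-pretrained encoder (the paper's point is to swap in those weights instead):

```python
# Hedged sketch: freeze a pretrained encoder and train only a small new head
# on a few hundred labelled ring galaxies. All data below is synthetic.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

encoder = efficientnet_b0(weights="IMAGENET1K_V1")
encoder.classifier = nn.Identity()           # expose 1280-d features
for p in encoder.parameters():
    p.requires_grad = False                  # keep the representation fixed

head = nn.Sequential(nn.Linear(1280, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(8, 3, 224, 224)         # stand-in batch of galaxy cutouts
is_ring = torch.randint(0, 2, (8,)).float()  # stand-in labels
logits = head(encoder(images)).squeeze(1)
loss = loss_fn(logits, is_ring)
opt.zero_grad(); loss.backward(); opt.step()
```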
We present Galaxy Zoo DECaLS: detailed visual morphological classifications of Dark Energy Camera Legacy Survey (DECaLS) images of galaxies within the SDSS DR8 footprint. The deeper DECaLS images (r = 23.6 vs. r = 22.2 for SDSS) reveal spiral arms, weak bars, and tidal features not visible in SDSS imaging. To best exploit the larger DECaLS images, volunteers select from a new set of answers designed to improve sensitivity to mergers and bars. Galaxy Zoo volunteers provided 7.5 million individual classifications of over 314,000 galaxies. 140,000 galaxies received at least 30 classifications, sufficient to accurately measure detailed morphology such as bars; the remainder received approximately 5. All classifications were used to train an ensemble of Bayesian convolutional neural networks (a state-of-the-art deep learning method) to predict posteriors for the detailed morphology of all 314,000 galaxies. When measured against confident volunteer classifications, the networks are approximately 99% accurate on each question. Morphology is a fundamental feature of each galaxy; our human and machine classifications are an accurate and detailed resource for understanding how galaxies evolve.
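The ensemble idea can be illustrated with a hedged sketch: average predictive distributions from several networks, keeping dropout active at test time as an approximate Bayesian treatment (the tiny CNN below is a placeholder, not the paper's architecture):

```python
# Hedged sketch of MC-dropout ensembling for vote-fraction posteriors.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_answers: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Dropout(0.5), nn.Linear(8, n_answers))
    def forward(self, x):
        return torch.softmax(self.net(x), dim=-1)

ensemble = [TinyCNN() for _ in range(3)]       # independently trained members
x = torch.randn(4, 1, 64, 64)                  # stand-in galaxy images

samples = []
for model in ensemble:
    model.train()                              # keep dropout on at test time
    with torch.no_grad():
        samples += [model(x) for _ in range(10)]  # MC-dropout samples

posterior_mean = torch.stack(samples).mean(0)  # expected vote fractions
posterior_std = torch.stack(samples).std(0)    # per-answer uncertainty
```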
Multi-document summarization (MDS) has traditionally been studied assuming a set of ground-truth topic-related input documents is provided. In practice, the input document set is unlikely to be available a priori and would need to be retrieved based on an information need, a setting we call open-domain MDS. We experiment with current state-of-the-art retrieval and summarization models on several popular MDS datasets extended to the open-domain setting. We find that existing summarizers suffer large reductions in performance when applied as-is to this more realistic task, though training summarizers with retrieved inputs can reduce their sensitivity to retrieval errors. To further probe these findings, we conduct perturbation experiments on summarizer inputs to study the impact of different types of document retrieval errors. Based on our results, we provide practical guidelines to help facilitate a shift to open-domain MDS. We release our code and experimental results alongside all data and model artifacts created during our investigation.
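As an illustration of the open-domain setting, a minimal retrieve-then-summarize pipeline, assuming BM25 via `rank_bm25` and a generic Hugging Face summarizer (the paper's actual retrievers and summarizers may differ):

```python
# Hedged sketch: retrieval errors at the first step propagate into the
# summary, which is the sensitivity the paper studies.
from rank_bm25 import BM25Okapi
from transformers import pipeline

corpus = ["doc one text ...", "doc two text ...", "doc three text ..."]
bm25 = BM25Okapi([doc.split() for doc in corpus])

query = "information need describing the topic"
top_docs = bm25.get_top_n(query.split(), corpus, n=2)   # retrieval step

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
summary = summarizer(" ".join(top_docs), max_length=60, min_length=10)
print(summary[0]["summary_text"])
```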
Scholarly text is often laden with jargon, or specialized language that divides disciplines. We extend past work that characterizes science at the level of word types, by using BERT-based word sense induction to find additional words that are widespread but overloaded with different uses across fields. We define scholarly jargon as discipline-specific word types and senses, and estimate its prevalence across hundreds of fields using interpretable, information-theoretic metrics. We demonstrate the utility of our approach for science of science and computational sociolinguistics by highlighting two key social implications. First, we measure audience design, and find that most fields reduce jargon when publishing in general-purpose journals, but some do so more than others. Second, though jargon has varying correlation with articles' citation rates within fields, it nearly always impedes interdisciplinary impact. Broadly, our measurements can inform ways in which language could be revised to serve as a bridge rather than a barrier in science.
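One way to make such a measure concrete (a hedged illustration, not the paper's exact metric) is to score a word's field-specificity by the KL divergence between P(field | word) and the background field distribution:

```python
# Hedged sketch: higher scores mean a word's usage is concentrated in a few
# fields, a rough proxy for jargon. The toy corpus is illustrative only.
import math
from collections import Counter

docs = [("biology", "cell membrane cell protein"),
        ("cs", "cell tower network protocol"),
        ("cs", "network model protocol")]

field_totals = Counter()
word_field = Counter()
for field, text in docs:
    for w in text.split():
        field_totals[field] += 1
        word_field[(w, field)] += 1

def jargon_score(word: str) -> float:
    n_word = sum(c for (w, _), c in word_field.items() if w == word)
    n_all = sum(field_totals.values())
    score = 0.0
    for field, n_field in field_totals.items():
        p = word_field[(word, field)] / n_word   # P(field | word)
        q = n_field / n_all                      # P(field)
        if p > 0:
            score += p * math.log2(p / q)
    return score

print(jargon_score("cell"), jargon_score("protocol"))
```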
Science tests competing theories or models by evaluating the similarity of their predictions against observational experience. Thus, how we measure similarity fundamentally determines what we learn. In machine learning and scientific modeling, similarity metrics are used as objective functions. A classic example is mean squared error (MSE), which is the optimal measure of similarity when errors are normally distributed, independent, and identically distributed (iid). In many cases, however, the error distribution is neither normal nor iid, so it is left to the scientist to determine an appropriate objective. Here, we review how information theory can guide that selection, then demonstrate the approach with a simple hydrologic model.
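To make the link explicit: for errors that are iid Gaussian with fixed variance \(\sigma^2\), minimizing the negative log-likelihood is exactly minimizing MSE, since the normalization term does not depend on the model parameters \(\theta\):

```latex
% Negative log-likelihood of iid Gaussian errors reduces to MSE.
\begin{aligned}
-\log \mathcal{L}(\theta)
  &= \frac{N}{2}\log(2\pi\sigma^2)
   + \frac{1}{2\sigma^2}\sum_{i=1}^{N}\bigl(y_i - \hat{y}_i(\theta)\bigr)^2,\\
\arg\min_{\theta}\,\bigl[-\log \mathcal{L}(\theta)\bigr]
  &= \arg\min_{\theta}\,\frac{1}{N}\sum_{i=1}^{N}\bigl(y_i - \hat{y}_i(\theta)\bigr)^2
   = \arg\min_{\theta}\,\mathrm{MSE}.
\end{aligned}
```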
Recent work on large language models (LLMs) has used fine-tuning to align outputs with the preferences of a prototypical user. This work assumes that human preferences are static and homogeneous across individuals, so that aligning to a single "generic" user will confer more general alignment. Here, we embrace the heterogeneity of human preferences to consider a different challenge: how might a machine help people with diverse views find agreement? We fine-tune a 70 billion parameter LLM to generate statements that maximize the expected approval for a group of people with potentially diverse opinions. Human participants provide written opinions on thousands of questions touching on moral and political issues (e.g., "should we raise taxes on the rich?"), and rate the LLM's generated candidate consensus statements for agreement and quality. A reward model is then trained to predict individual preferences, enabling it to quantify and rank consensus statements in terms of their appeal to the overall group, defined according to different aggregation (social welfare) functions. The model produces consensus statements that are preferred by human users over those from prompted LLMs (>70%) and significantly outperforms a tight fine-tuned baseline that lacks the final ranking step. Further, our best model's consensus statements are preferred over the best human-generated opinions (>65%). We find that when we silently constructed consensus statements from only a subset of group members, those who were excluded were more likely to dissent, revealing the sensitivity of the consensus to individual contributions. These results highlight the potential to use LLMs to help groups of humans align their values with one another.
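The ranking step can be sketched as follows (a hedged illustration: the preference model is stubbed with random scores, and the welfare functions shown are standard utilitarian and Rawlsian choices, not necessarily the paper's exact set):

```python
# Hedged sketch: rank candidate consensus statements by aggregating
# per-person predicted approval under a chosen social welfare function.
import random

def preference_model(person: str, statement: str) -> float:
    # Stand-in for the trained reward model's predicted approval.
    random.seed(hash((person, statement)) % (2**32))
    return random.random()

def welfare(scores, kind="utilitarian"):
    if kind == "utilitarian":
        return sum(scores) / len(scores)   # mean approval across the group
    if kind == "rawlsian":
        return min(scores)                 # approval of the worst-off member
    raise ValueError(kind)

group = ["alice", "bob", "carol"]
candidates = ["statement A ...", "statement B ...", "statement C ..."]

best = max(candidates,
           key=lambda s: welfare([preference_model(p, s) for p in group],
                                 kind="rawlsian"))
print(best)
```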
Wildfires are a common problem in many areas of the world, often with catastrophic consequences. A number of systems have been created to provide early warnings of wildfires, including those that use satellite data to detect fires. The increased availability of small satellites, such as CubeSats, allows the wildfire detection response time to be reduced by deploying constellations of multiple satellites over regions of interest. By using machine-learned components on board the satellites, constraints that limit the amount of data that can be processed and sent back to ground stations can be overcome. There are hazards associated with wildfire alert systems, such as failing to detect the presence of a wildfire, or detecting a wildfire in the incorrect location. It is therefore necessary to be able to create a safety assurance case for the wildfire alert ML component that demonstrates it is sufficiently safe for use. This paper describes in detail how a safety assurance case for an ML wildfire alert system is created. This represents the first fully developed safety case for an ML component, containing an explicit argument and evidence for the safety of the machine learning.
We present Sparrow, an information-seeking dialogue agent trained to be more helpful, correct, and harmless compared to prompted language model baselines. We train our models with reinforcement learning from human feedback, with new additions that help human raters judge agent behaviour. First, to make our agent more helpful and harmless, we break down the requirements for good dialogue into natural language rules the agent should follow, and ask raters about each rule separately. We demonstrate that this breakdown allows us to collect more targeted human judgements of agent behaviour and enables more efficient rule-conditioned reward models. Second, our agent provides evidence from sources supporting factual claims when collecting preference judgements over model statements. For factual questions, the evidence Sparrow provides supports its responses 78% of the time. Sparrow is preferred over baselines while being more resilient to adversarial probing by humans, violating our rules only 8% of the time when probed. Finally, we conduct extensive analyses showing that, although our model learns to follow our rules, it can still exhibit distributional biases.
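One simple way to picture combining the two reward signals (a hedged sketch, not Sparrow's actual reward shaping; the models are stubbed with fixed numbers):

```python
# Hedged sketch: penalise the preference reward by the likelihood of the
# dialogue breaking any natural-language rule, per the rule-conditioned RMs.
def combined_reward(dialogue: str,
                    preference_score: float,
                    rule_violation_probs: dict[str, float],
                    penalty: float = 1.0) -> float:
    worst_violation = max(rule_violation_probs.values(), default=0.0)
    return preference_score - penalty * worst_violation

r = combined_reward(
    "User: ... Agent: ...",
    preference_score=0.8,
    rule_violation_probs={"no_medical_advice": 0.05, "stay_on_topic": 0.10})
print(r)  # ~0.7
```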
We introduce a new approach to image forensics: placing physical refractive objects, which we call totems, into a scene in order to protect any photograph taken of that scene. Totems bend and redirect light rays, thus providing multiple, albeit distorted, views of the scene within a single image. A defender can use these distorted totem pixels to detect whether an image has been manipulated. Our approach unscrambles the light rays passing through the totems by estimating their positions in the scene and using their known geometric and material properties. To verify a totem-protected image, we detect inconsistencies between the scene reconstructed from the totem viewpoints and the scene's appearance from the camera viewpoint. Such an approach makes the adversarial manipulation task more difficult, since the adversary must modify both the totem and the image pixels in a geometrically consistent manner without knowing the totem's physical properties. Unlike prior learning-based methods, our approach does not require training on datasets of specific manipulations, and instead uses the physical properties of the scene and camera to solve the forensics problem.
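The physical building block behind unscrambling totem pixels can be sketched with vector-form Snell's law; tracing rays through a full totem composes such refraction steps (a hedged sketch under idealized assumptions):

```python
# Hedged sketch: refract a ray at a surface with known refractive index.
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at surface normal n (pointing toward the
    incoming ray), with refractive-index ratio eta = n1 / n2."""
    d, n = np.asarray(d, float), np.asarray(n, float)
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

# Ray entering glass (n ~= 1.5) from air at 45 degrees:
d = np.array([np.sin(np.pi/4), -np.cos(np.pi/4), 0.0])
ray_in_glass = refract(d, n=np.array([0.0, 1.0, 0.0]), eta=1.0/1.5)
print(ray_in_glass)                      # bends toward the surface normal
```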